Adaptive Feedforward Neural Network Control With an Optimized Hidden Node Distribution

Authors

Abstract

Composite adaptive radial basis function neural network (RBFNN) control with a lattice distribution of hidden nodes has three inherent demerits: 1) the approximation domain of the RBFNN is difficult to determine a priori; 2) only a partial persistence of excitation (PE) condition can be guaranteed; and 3) the required number of hidden nodes is, in general, enormous. This article proposes an adaptive feedforward RBFNN controller with an optimized distribution of hidden nodes to suitably address the above demerits. The hidden nodes, calculated by a K-means algorithm, are optimally distributed along the desired state trajectory. The resulting feedforward RBFNN satisfies the PE condition for periodic reference trajectories, so the weights of all hidden nodes will converge to their optimal values. The proposed method considerably reduces the number of hidden nodes while achieving better approximation ability. The scheme shares a similar rationality with classical PID control in two special cases and can thus be seen as an enhanced PID scheme. When implemented on digital devices, the proposed method, applied to a manipulator with unknown dynamics, potentially achieves better performance than model-based schemes with accurate dynamics. Simulation results demonstrate the effectiveness of the proposed scheme. The result provides deeper insight into the coordination of adaptive control and deterministic learning theory.

Impact Statement—Adaptive neural network control learns the dynamics of a robot when both the structures and the parameters of the target model are unknown in advance. Unfortunately, current controllers need large-scale networks to approximate the dynamics of the manipulator, and the weights cannot be guaranteed to converge. The method in this article not only scales down the networks, which substantially alleviates the computational burden, but also evidently improves control performance. Examples show that it increases the control accuracy more than nine times with 35 hidden nodes compared with the traditional scheme. Intuitively, people usually believe that a controller built on an accurate dynamic model may achieve the best performance. However, compared with such a model-based controller, the accuracy is improved even further, by 1.5 times. This technology offers a straightforward path for engineers, who may not be experts in complicated system analysis methods, to design robotic controllers.
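As a loose illustration of the node-placement idea (not the authors' code), the sketch below uses plain NumPy to run K-means on samples of a desired periodic trajectory and to build a Gaussian RBF regressor from the resulting centers. The function names, the fixed width, the number of centers, and the toy trajectory are all assumptions of this sketch.

```python
# Minimal sketch: distributing RBF hidden-node centers along a desired
# state trajectory with K-means, instead of a lattice over the whole domain.
import numpy as np

def kmeans_centers(samples, n_centers, n_iter=50, seed=0):
    """Plain K-means on trajectory samples; returns the cluster centers."""
    rng = np.random.default_rng(seed)
    centers = samples[rng.choice(len(samples), n_centers, replace=False)]
    for _ in range(n_iter):
        # assign each sample to its nearest center
        d = np.linalg.norm(samples[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(n_centers):
            if np.any(labels == k):
                centers[k] = samples[labels == k].mean(axis=0)
    return centers

def rbf_features(x, centers, width):
    """Gaussian RBF regressor vector evaluated at state x."""
    return np.exp(-np.sum((x - centers) ** 2, axis=1) / (2.0 * width ** 2))

# Desired periodic trajectory (q_d, dq_d) sampled over one period (toy example).
t = np.linspace(0.0, 2.0 * np.pi, 500)
traj = np.stack([np.sin(t), np.cos(t)], axis=1)      # columns: [q_d, dq_d]

centers = kmeans_centers(traj, n_centers=20)          # few nodes, on the trajectory
phi = rbf_features(traj[0], centers, width=0.3)       # regressor at one state
print(phi.shape)                                      # (20,)
```

Because the centers lie on the trajectory the closed-loop state is meant to follow, each node is repeatedly excited along a periodic reference, which is the intuition behind the PE claim in the abstract.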

Similar Articles

Learning of a single-hidden layer feedforward neural network using an optimized extreme learning machine

This paper proposes a learning framework for single-hidden layer feedforward neural networks (SLFN) called optimized extreme learning machine (O-ELM). In O-ELM, the structure and the parameters of the SLFN are determined using an optimization method. The output weights, like in the batch ELM, are obtained by a least squares algorithm, but using Tikhonov’s regularization in order to improve the ...
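Since the snippet above centers on computing the output weights by least squares with Tikhonov regularization, here is a minimal, hedged sketch of that step for a generic random-hidden-layer network; the activation, regularization constant, and function names are my own choices, not O-ELM's API.

```python
# Sketch: ELM-style output weights via Tikhonov-regularized least squares.
import numpy as np

def elm_fit(X, Y, n_hidden=50, reg=1e-2, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights
    b = rng.standard_normal(n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                            # hidden-layer output matrix
    # beta = (H^T H + reg*I)^{-1} H^T Y
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# toy usage
X = np.random.rand(200, 3)
Y = np.sin(X.sum(axis=1, keepdims=True))
W, b, beta = elm_fit(X, Y)
print(np.mean((elm_predict(X, W, b, beta) - Y) ** 2))
```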

New RBF neural network classifier with optimized hidden neurons number

This article presents a noticeable performance improvement of a neural classifier based on an RBF network. Based on the Mahalanobis distance, this new classifier relatively increases the recognition rate while remarkably decreasing the number of hidden-layer neurons. We thus obtain a new, very general RBF classifier, very simple, not requiring any adjustment parameter, and presenting an excelle...
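A minimal sketch, assuming a Gaussian unit whose spread is given by a class covariance matrix, of how a Mahalanobis-distance-based RBF activation differs from the usual Euclidean one; this is an illustration of the idea, not the classifier proposed in that paper.

```python
# One RBF unit activated by the Mahalanobis distance from a class mean,
# so a single neuron can cover an elongated, correlated cluster.
import numpy as np

def mahalanobis_rbf(x, mean, cov):
    """exp(-0.5 * (x - mean)^T cov^{-1} (x - mean))"""
    diff = x - mean
    return float(np.exp(-0.5 * diff @ np.linalg.solve(cov, diff)))

# toy usage: one hidden neuron fitted from one class's samples
rng = np.random.default_rng(1)
class_samples = rng.multivariate_normal([0, 0], [[2.0, 1.5], [1.5, 2.0]], size=200)
mean = class_samples.mean(axis=0)
cov = np.cov(class_samples, rowvar=False)

print(mahalanobis_rbf(np.array([0.5, 0.5]), mean, cov))   # high activation
print(mahalanobis_rbf(np.array([3.0, -3.0]), mean, cov))  # low activation
```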

Prediction of Cardiovascular Diseases Using an Optimized Artificial Neural Network

Introduction: It is of utmost importance to predict cardiovascular diseases correctly. Therefore, it is necessary to utilize those models with a minimum error rate and maximum reliability. This study aimed to combine an artificial neural network with the genetic algorithm to assess patients with myocardial infarction and congestive heart failure. Materials & Methods: This study utilized a m...
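For readers unfamiliar with ANN/GA hybrids, the sketch below shows one common arrangement, a genetic-style search over the weights of a tiny network; the encoding, fitness function, and data here are toy assumptions and are not taken from the cited study.

```python
# Illustrative only: a simple evolutionary search tuning the weights of a
# small 4-6-1 network for binary prediction.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 4))                      # toy patient features
y = (X[:, 0] + X[:, 1] > 0).astype(float)              # toy binary label

def predict(weights, X):
    W1, W2 = weights[:4 * 6].reshape(4, 6), weights[4 * 6:].reshape(6)
    return 1 / (1 + np.exp(-(np.tanh(X @ W1) @ W2)))   # 4-6-1 network

def fitness(weights):
    return -np.mean((predict(weights, X) - y) ** 2)    # higher is better

pop = rng.standard_normal((40, 4 * 6 + 6))             # population of weight vectors
for _ in range(100):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-20:]]            # selection: keep best half
    # offspring by mutating random parents (no crossover, for brevity)
    children = parents[rng.integers(0, 20, 20)] + 0.1 * rng.standard_normal((20, 30))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print(np.mean((predict(best, X) > 0.5) == y))          # training accuracy
```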

Direct Adaptive Control Using Feedforward Neural Networks

This paper proposes a new scheme for direct neural adaptive control that works efficiently by employing only one neural network, used for simultaneously identifying and controlling the plant. The idea behind this structure of adaptive control is to compensate for the control input obtained by a conventional feedback controller. The neural network training process is carried out by using two different ...
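A rough sketch, under my own assumptions, of the compensation structure described above: a fixed-feature network adds a term to a conventional PD control law, and its output weights are adapted from the tracking error. The update rule here is a generic gradient-style law, not the two training procedures the paper uses.

```python
# One neural network compensating a conventional feedback (PD) controller.
import numpy as np

class NNCompensator:
    def __init__(self, n_in, n_hidden=10, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.V = 0.1 * rng.standard_normal((n_in, n_hidden))  # fixed random features
        self.w = np.zeros(n_hidden)                            # adapted output weights
        self.lr = lr

    def output(self, z):
        return np.tanh(z @ self.V) @ self.w

    def adapt(self, z, error):
        # move weights to reduce the tracking error (robust terms omitted)
        self.w += self.lr * error * np.tanh(z @ self.V)

def control(q, dq, q_d, dq_d, kp, kd, nn):
    e, de = q_d - q, dq_d - dq
    z = np.array([q, dq, q_d, dq_d])
    u = kp * e + kd * de + nn.output(z)   # feedback + neural compensation
    nn.adapt(z, e + de)                   # filtered tracking error drives adaptation
    return u

# toy usage: one control step for a single joint
nn = NNCompensator(n_in=4)
u = control(q=0.1, dq=0.0, q_d=0.2, dq_d=0.0, kp=20.0, kd=5.0, nn=nn)
print(u)
```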

A new feedforward neural network hidden layer neuron pruning algorithm

This paper deals with a new approach to detect the structure (i.e., determination of the number of hidden units) of a feedforward neural network (FNN). This approach is based on the principle that any FNN could be represented by a Volterra series, such as a nonlinear input-output model. The new proposed algorithm is based on the following three steps: first, we develop the nonlinear activation fun...
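Because the description above is truncated, the sketch below shows only a generic hidden-unit pruning step (dropping units with the smallest outgoing-weight norm); it is plainly not the Volterra-series-based criterion the paper develops.

```python
# Generic magnitude-based pruning of hidden units in a one-hidden-layer FNN.
import numpy as np

def prune_hidden_units(W_in, W_out, keep):
    """Keep the `keep` hidden units whose outgoing weights matter most."""
    importance = np.linalg.norm(W_out, axis=1)        # one score per hidden unit
    idx = np.argsort(importance)[-keep:]              # indices of units to keep
    return W_in[:, idx], W_out[idx, :]

# toy usage: prune a 5-20-3 network down to 8 hidden units
rng = np.random.default_rng(0)
W_in, W_out = rng.standard_normal((5, 20)), rng.standard_normal((20, 3))
W_in_p, W_out_p = prune_hidden_units(W_in, W_out, keep=8)
print(W_in_p.shape, W_out_p.shape)                    # (5, 8) (8, 3)
```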

Journal

Journal title: IEEE Transactions on Artificial Intelligence

Year: 2021

ISSN: 2691-4581

DOI: https://doi.org/10.1109/tai.2021.3074106